Mathematical models in epidemiology are an indispensable tool for determining the dynamics and important characteristics of infectious diseases. Apart from their scientific merit, these models are often used to inform political decisions and intervention measures during an ongoing outbreak. However, reliably inferring the dynamics of ongoing outbreaks by connecting complex models to real data is still hard, and requires either laborious manual parameter fitting or expensive optimization methods that have to be repeated from scratch for every application of a given model. In this work, we address this problem with a novel combination of epidemiological modeling and specialized neural networks. Our approach entails two computational phases: in an initial training phase, a mathematical model describing the epidemic is used as a teacher for a neural network, which acquires global knowledge about the full range of possible disease dynamics. In the subsequent inference phase, the trained neural network processes the observed data of an actual outbreak and infers the model parameters that realistically reproduce the observed dynamics and reliably predict future progression. With its flexible framework, our simulation-based approach is applicable to a wide range of epidemiological models. Moreover, since our method is fully Bayesian, it is designed to incorporate all available prior knowledge about plausible parameter values and returns complete joint posterior distributions over these parameters. Applying our method to the early COVID-19 outbreak phase in Germany demonstrates that we can obtain reliable probabilistic estimates for important disease characteristics, such as the generation time, the fraction of undetected infections, the likelihood of transmission before symptom onset, and reporting delays, from a very moderate amount of real-world observations.
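As a concrete illustration of the two computational phases, the following minimal Python sketch trains a small network on simulated SIR outbreaks and then infers outbreak parameters from "observed" data in a single forward pass. It is only a toy stand-in for the approach described above: a Gaussian posterior head replaces the paper's specialized neural network, and the SIR simulator, the priors, and all settings are illustrative assumptions.

```python
# Phase 1: a mechanistic model "teaches" a network on simulated outbreaks.
# Phase 2: amortized inference on observed data is a single forward pass.
import numpy as np
import torch
import torch.nn as nn

def simulate_sir(beta, gamma, n_days=60, n0=83e6, i0=100):
    """Forward-simulate a simple discrete SIR model; return daily new cases."""
    s, i, r = n0 - i0, float(i0), 0.0
    new_cases = []
    for _ in range(n_days):
        new_inf = beta * s * i / n0      # new infections on this day
        rec = gamma * i                  # recoveries on this day
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        new_cases.append(new_inf)
    return np.array(new_cases, dtype=np.float32)

# Training phase: draw parameters from the prior, simulate, and fit the network.
params = np.column_stack([np.random.uniform(0.1, 0.6, 4096),    # beta prior
                          np.random.uniform(0.05, 0.3, 4096)])  # gamma prior
sims = np.stack([simulate_sir(b, g) for b, g in params])
x = torch.log1p(torch.tensor(sims))
theta = torch.tensor(params, dtype=torch.float32)

net = nn.Sequential(nn.Linear(x.shape[1], 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 4))   # mean and log-std for (beta, gamma)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    mu, log_sigma = net(x).chunk(2, dim=1)
    loss = (((theta - mu) / log_sigma.exp()) ** 2 / 2 + log_sigma).mean()  # Gaussian NLL
    opt.zero_grad(); loss.backward(); opt.step()

# Inference phase: the trained network maps an "observed" outbreak to a posterior.
observed = torch.log1p(torch.tensor(simulate_sir(0.35, 0.1))).unsqueeze(0)
mu, log_sigma = net(observed).chunk(2, dim=1)
print("posterior mean:", mu.detach().numpy(), "posterior std:", log_sigma.exp().detach().numpy())
```

The same pattern extends to richer epidemiological models: only the simulator and the prior change, while the trained network amortizes inference over any outbreak consistent with them.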
Brain-inspired, event-based neuromorphic processing systems have emerged as a promising technology, in particular for biomedical circuits and systems. However, both neuromorphic and biological implementations of neural networks face critical energy and memory constraints. To minimize the memory resources used in multi-core neuromorphic processors, we propose a network design approach inspired by biological neural networks. We use this approach to design a new routing scheme optimized for small-world networks and, alongside it, present a hardware-aware placement algorithm that optimizes the allocation of resources for small-world network models. We validate the algorithm on a canonical small-world network and present preliminary results for other networks.
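The placement idea can be illustrated with a toy sketch: generate a canonical small-world (Watts-Strogatz) graph and compare a locality-aware neuron-to-core assignment against a random one by counting synapses that must cross core boundaries. This is only a schematic stand-in for the hardware-aware placement algorithm described above; the graph size, core count, and cost proxy are arbitrary assumptions.

```python
# Compare a locality-aware placement with a random one on a small-world graph.
import random
import networkx as nx

N_NEURONS, K_NEIGHBORS, REWIRE_P, N_CORES = 256, 8, 0.1, 4
g = nx.watts_strogatz_graph(N_NEURONS, K_NEIGHBORS, REWIRE_P, seed=0)

def cross_core_edges(graph, core_of):
    """Routing cost proxy: synapses whose endpoints sit on different cores."""
    return sum(1 for u, v in graph.edges() if core_of[u] != core_of[v])

core_size = N_NEURONS // N_CORES
# Locality-aware placement: neighbouring nodes on the ring share a core.
local = {n: n // core_size for n in g.nodes()}
# Baseline: random placement that ignores the network topology.
rng = random.Random(0)
randomised = {n: rng.randrange(N_CORES) for n in g.nodes()}

print("cross-core edges, locality-aware:", cross_core_edges(g, local))
print("cross-core edges, random:        ", cross_core_edges(g, randomised))
```

Counting cross-core edges is a crude proxy for routing memory and traffic; an actual hardware-aware placement would optimize against the specific resource constraints of the target multi-core processor.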
Real-world databases are complex: they typically present redundancy, as well as correlations shared across heterogeneous and multiple representations of the same data. Exploiting and disentangling the information shared between views is therefore crucial. To this end, recent studies often fuse all views into a shared nonlinear, complex latent space, but in doing so they lose interpretability. To overcome this limitation, we propose here a new method that combines multiple variational autoencoder (VAE) structures with a factor analysis latent space (FA-VAE). Specifically, we use VAEs to learn a private representation of each heterogeneous view in a continuous latent space. We then model the shared latent space by projecting every private variable into a low-dimensional latent space using a linear projection matrix. In this way, we create an interpretable hierarchical dependency between private and shared information, and the novel model can simultaneously: (i) learn from multiple heterogeneous views, (ii) obtain an interpretable hierarchical shared space, and (iii) perform transfer learning between generative models.
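A schematic PyTorch sketch of the described hierarchy, under our own assumptions rather than the authors' released code: each view gets a private VAE latent, and a per-view linear loading matrix ties the private latents to one shared low-dimensional latent. The shared latent is treated as a free variable here for brevity, whereas the full model infers it variationally.

```python
# Per-view private VAE latents linked to a shared latent by linear loadings.
import torch
import torch.nn as nn

class ViewVAE(nn.Module):
    """Private encoder/decoder for one heterogeneous view."""
    def __init__(self, x_dim, z_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        return z, mu, logvar

class FactorVAE(nn.Module):
    """Ties per-view private latents to a shared latent via linear loading matrices."""
    def __init__(self, x_dims, z_dim=8, shared_dim=3):
        super().__init__()
        self.views = nn.ModuleList(ViewVAE(d, z_dim) for d in x_dims)
        # One linear projection (factor-analysis loading) per view: shared -> private.
        self.loadings = nn.ModuleList(nn.Linear(shared_dim, z_dim, bias=False) for _ in x_dims)

    def forward(self, xs, shared):
        out = []
        for x, vae, W in zip(xs, self.views, self.loadings):
            z, mu, logvar = vae.encode(x)
            recon = vae.dec(z)
            align = ((mu - W(shared)) ** 2).mean()   # keep private latent close to W @ shared
            out.append((recon, align))
        return out

# Two toy views of the same 16 samples, plus a shared latent to be optimised jointly.
x1, x2 = torch.randn(16, 20), torch.randn(16, 5)
shared = torch.zeros(16, 3, requires_grad=True)
model = FactorVAE([20, 5])
outputs = model([x1, x2], shared)
print(len(outputs), outputs[0][0].shape)
```

In the full model the shared latent is inferred together with the loadings, which is what yields the interpretable hierarchy; the free variable above only shows where the linear projection sits between private and shared spaces.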
We construct a physically parameterized probabilistic autoencoder (PAE) to learn the intrinsic diversity of Type Ia supernovae (SNe Ia) from a sparse set of spectral time series. The PAE is a two-stage generative model, consisting of an autoencoder (AE) that is interpreted probabilistically after training with a normalizing flow (NF). We demonstrate that the PAE learns a low-dimensional latent space that captures the nonlinear range of features present within the population, and that it can accurately model the spectral evolution of SNe Ia across the full range of wavelengths and observation times directly from the data. By introducing a correlation penalty term and a multi-stage training setup alongside our physically parameterized network, we show that intrinsic and extrinsic modes of variability can be separated during training, eliminating the need for additional models to perform standardization. We then use the PAE in a number of downstream tasks on SNe Ia for increasingly precise cosmological analyses, including the automatic detection of SN outliers, the generation of samples consistent with the data distribution, and solving the inverse problem in the presence of noisy and incomplete data to constrain cosmological distance measurements. We find that the optimal number of intrinsic model parameters appears to be three, consistent with previous studies, and show that we can standardize our test sample of SNe Ia to $0.091 \pm 0.010$ mag, which corresponds to $0.074 \pm 0.010$ mag if the peculiar velocity contribution is removed. Trained models and code are released at \href{https://github.com/georgestein/supaernova}{github.com/georgestein/supaernova}.
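The two-stage structure can be sketched on toy data: stage one trains a deterministic autoencoder, stage two fits a density over the latent codes so that unusual objects can be flagged. A multivariate Gaussian stands in for the normalizing flow, and the "spectra" are synthetic; nothing here reproduces the released models, it only mirrors the AE-then-density pattern.

```python
# Toy two-stage probabilistic autoencoder: AE first, latent density second.
import torch
import torch.nn as nn

torch.manual_seed(0)
wave_bins, latent_dim = 100, 3
phase = torch.rand(512, 1)                                              # nuisance "time of observation"
spectra = torch.sin(torch.linspace(0.0, 6.28, wave_bins) * (1 + phase))  # toy spectra, shape (512, 100)

enc = nn.Sequential(nn.Linear(wave_bins, 64), nn.ReLU(), nn.Linear(64, latent_dim))
dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, wave_bins))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

# Stage 1: the deterministic autoencoder captures the low-dimensional diversity.
for _ in range(300):
    z = enc(spectra)
    loss = ((dec(z) - spectra) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: a density over the latents gives the probabilistic interpretation;
# the paper uses a normalizing flow, a Gaussian is the simplest stand-in.
with torch.no_grad():
    z = enc(spectra)
    cov = torch.cov(z.T) + 1e-4 * torch.eye(latent_dim)
    density = torch.distributions.MultivariateNormal(z.mean(0), covariance_matrix=cov)
    scores = density.log_prob(z)                     # low log-prob flags candidate outliers
    print("most unusual spectra:", torch.topk(-scores, k=5).indices.tolist())
```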
Machine learning techniques typically applied to dementia prediction lack the ability to jointly learn multiple tasks, handle time-dependent heterogeneous data, and deal with missing values. In this paper, we propose a framework, based on the recently presented SSHIBA model, for jointly learning different tasks on longitudinal data with missing values. The method uses Bayesian variational inference to impute missing values and to combine information from multiple views. In this way, we can combine different data views from different time points in a common latent space and learn the relations between time points, while simultaneously modeling and predicting several output variables. We apply this model to jointly predict diagnosis, ventricle volume, and clinical scores in dementia. The results show that SSHIBA learns good imputations of the missing values and outperforms the baselines while simultaneously predicting three different tasks.
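As a simplified, non-Bayesian stand-in for the imputation step, the sketch below fits a shared low-rank latent space to two concatenated views and fills in the masked entries by optimizing only over the observed ones; all dimensions and the synthetic data are assumptions, and SSHIBA itself uses variational inference with view-specific likelihoods, which this omits.

```python
# Low-rank imputation across concatenated views with a missing-value mask.
import torch

torch.manual_seed(0)
n, d_view1, d_view2, k = 200, 10, 6, 3
truth = torch.randn(n, k) @ torch.randn(k, d_view1 + d_view2)   # two views driven by shared factors
mask = torch.rand_like(truth) > 0.3                              # ~30% of entries are missing
observed = torch.where(mask, truth, torch.zeros_like(truth))

U = torch.randn(n, k, requires_grad=True)                        # per-subject latent, shared across views
V = torch.randn(k, d_view1 + d_view2, requires_grad=True)        # view loadings
opt = torch.optim.Adam([U, V], lr=0.05)
for _ in range(500):
    recon = U @ V
    loss = ((recon - observed)[mask] ** 2).mean()                # fit only the observed entries
    opt.zero_grad(); loss.backward(); opt.step()

imputed = (U @ V).detach()
rmse = ((imputed - truth)[~mask] ** 2).mean().sqrt()
print(f"RMSE on missing entries: {rmse:.3f}")
```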
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, the functional variations of the force field of dynamical system instances without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level capturing the common force field form among studied dynamical system instances and an inner level adapting to individual system instances. A priori physical knowledge can be conveniently embedded in the neural network architecture as inductive bias, such as conservative force field and Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds, and inversely reveal knowledge on the physical parameters of a system, or as a Neural Gauge to "measure" the physical parameters of an unseen system with observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
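A hedged, first-order sketch of the bi-level setup: the outer loop updates a shared force-field network, while the inner loop adapts a small per-system context vector to each sampled system (here a family of linear springs). The architecture, the spring family, and the first-order approximation are our assumptions, not the iMODE implementation.

```python
# Bi-level (first-order) meta-learning of a shared force field with per-system adaptation.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ForceField(nn.Module):
    def __init__(self, ctx_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1 + ctx_dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, x, ctx):
        return self.net(torch.cat([x, ctx.expand(x.shape[0], -1)], dim=-1))

model = ForceField()
outer_opt = torch.optim.Adam(model.net.parameters(), lr=1e-3)

def sample_system():
    """A family of springs F = -k x; k is the hidden physical parameter."""
    k = torch.rand(1) * 4 + 1
    x = torch.rand(64, 1) * 2 - 1
    return x, -k * x

for step in range(2000):
    x, f = sample_system()
    ctx = torch.zeros(1, 2, requires_grad=True)        # inner level: per-system adaptation
    for _ in range(5):
        inner_loss = ((model(x, ctx) - f) ** 2).mean()
        (g,) = torch.autograd.grad(inner_loss, ctx)
        ctx = (ctx - 0.5 * g).detach().requires_grad_(True)
    outer_loss = ((model(x, ctx) - f) ** 2).mean()      # outer level: shared force-field form
    outer_opt.zero_grad(); outer_loss.backward(); outer_opt.step()

# After training, adapting ctx to an unseen system plays the role of a "Neural Gauge":
# the adapted context encodes (a function of) the hidden spring constant k.
```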
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
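To make the source/sink aggregation concrete, the sketch below computes a simple lag-1 linear Granger measure between synthetic channels (rather than the phase-based causality used above) and sums the directed connectivity matrix along rows (sources) and columns (sinks); the data, lag, and channel count are illustrative assumptions.

```python
# Directed connectivity from a toy Granger measure, aggregated per channel as source/sink.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 6, 2000
eeg = rng.standard_normal((n_channels, n_samples))
eeg[3, 1:] += 0.6 * eeg[1, :-1]          # plant a directed influence: channel 1 -> channel 3

def granger_strength(x_src, x_dst, lag=1):
    """Variance reduction in predicting x_dst when the past of x_src is added."""
    y = x_dst[lag:]
    own = np.column_stack([x_dst[lag - 1:-1], np.ones_like(y)])
    both = np.column_stack([x_dst[lag - 1:-1], x_src[lag - 1:-1], np.ones_like(y)])
    res_own = y - own @ np.linalg.lstsq(own, y, rcond=None)[0]
    res_both = y - both @ np.linalg.lstsq(both, y, rcond=None)[0]
    return np.log(res_own.var() / res_both.var())   # > 0 means predictive gain

gc = np.array([[0.0 if i == j else granger_strength(eeg[i], eeg[j])
                for j in range(n_channels)] for i in range(n_channels)])  # gc[i, j]: i -> j
source_activity = gc.sum(axis=1)   # how strongly each channel drives the others
sink_activity = gc.sum(axis=0)     # how strongly each channel is driven by the others
print("strongest sink:", int(sink_activity.argmax()))   # expected: channel 3
```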
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
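The core building block can be sketched directly: an input convex neural network (ICNN) parameterizes a convex potential, and its gradient yields a monotone, Brenier-type map. The layer sizes and the training-free demo below are illustrative assumptions, not the authors' released implementation.

```python
# Input convex neural network whose gradient acts as a Brenier-type map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar potential f(z) that is convex in z."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.Wz0 = nn.Linear(dim, hidden)
        self.Wz1 = nn.Linear(dim, hidden)
        self.Wz2 = nn.Linear(dim, 1)
        # Hidden-to-hidden weights are constrained non-negative to preserve convexity.
        self.Wu1 = nn.Parameter(torch.rand(hidden, hidden) * 0.1)
        self.Wu2 = nn.Parameter(torch.rand(1, hidden) * 0.1)

    def forward(self, z):
        u1 = F.softplus(self.Wz0(z))                              # convex, non-decreasing activation
        u2 = F.softplus(self.Wz1(z) + u1 @ F.relu(self.Wu1).T)
        return self.Wz2(z) + u2 @ F.relu(self.Wu2).T              # scalar potential, shape (N, 1)

def brenier_map(potential, z):
    """Gradient of a convex potential: a monotone map of the latent variables."""
    z = z.requires_grad_(True)
    f = potential(z).sum()
    return torch.autograd.grad(f, z, create_graph=True)[0]

z = torch.randn(8, 2)
phi = ICNN(dim=2)
mapped = brenier_map(phi, z)          # shape (8, 2)
print(mapped.shape)
```

Because the potential is convex, its gradient is monotone and hence injective almost everywhere, which is the kind of structural constraint the identifiability argument relies on.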
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .
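A schematic multi-task baseline, under our own assumptions rather than the MTNeuro reference models: one shared convolutional encoder over an image patch feeds both an image-level region classification head and a pixel-level microstructure segmentation head, which is the kind of multi-readout setup the benchmark evaluates.

```python
# Shared encoder with two readouts: region classification and pixel-level segmentation.
import torch
import torch.nn as nn

class MultiTaskReadout(nn.Module):
    def __init__(self, n_regions=4, n_microstructures=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.region_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(32, n_regions))       # image-level label
        self.segment_head = nn.Conv2d(32, n_microstructures, 1)          # per-pixel labels

    def forward(self, x):
        features = self.encoder(x)
        return self.region_head(features), self.segment_head(features)

x = torch.randn(2, 1, 64, 64)                  # two grayscale microtomography patches
region_logits, segmentation_logits = MultiTaskReadout()(x)
print(region_logits.shape, segmentation_logits.shape)   # (2, 4) and (2, 3, 64, 64)
```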